Patent abstract:
The subject of the present invention is a method of assisting the navigation of a rotorcraft by displaying in flight a dynamic representation of the outside world (Me) in the event of loss of visibility (Brown-out / White-out). Integrated location front maps (35) are generated from front images (6) captured by a set of front cameras (2). Terrain elevation data (49) are generated from images (7,8,9) captured by complementary sets of side (4,5) and rear (3) cameras. In the event of loss of visibility (P), a reconstituted world (Mr) is developed from the integrated location front maps (35). On request (61) of the pilot, an extrapolated world (Mex) is developed by adding the terrain elevation data (49) to the integrated location front maps (35), according to the common dating data (13) and according to the relative positions of the different sets of cameras (2,3,4,5) with respect to one another.
Publication number: FR3016694A1
Application number: FR1400103
Filing date: 2014-01-20
Publication date: 2015-07-24
Inventors: Nicolas Belanger; Francois Xavier Filias; Geoffroy Pertuisot; Nicolas Damiani
Applicant: Airbus Helicopters SAS
IPC main class:
Patent description:

[0001] A method of assisting the navigation of a rotorcraft by dynamically displaying a representation of the outside world built in flight, instantly and/or offline. The present invention lies in the field of navigation assistance systems for rotorcraft based on processing images captured in flight, followed by the in-flight construction and dynamic display of a representation of the outside world from the previously captured images. The present invention more particularly relates to such navigation assistance systems capable of displaying a dynamic representation of the outside world in a situation of loss of visibility for the pilot of a rotorcraft. Such a situation of loss of visibility is notably caused by the rotorcraft operating close to the ground, which typically generates the formation of particle clouds in the environment outside the rotorcraft. Said clouds of particles, such as dust or snow for example, cause a situation of loss of visibility commonly designated by the anglicism "Brown-out / White-out". Rotorcraft are aircraft whose lift is provided by at least one rotor with a substantially vertical axis, and which can in particular operate close to the ground not only at high speeds but also, typically, at low speeds and/or in hover. As an indication, the low speeds of progression of a rotorcraft are commonly considered to be below 50 kt (knots), and the high speeds can reach 125 kt or 150 kt, and more in the particular case of a rotorcraft equipped with propulsion propellers with a substantially horizontal axis providing complementary propulsion in translation of the rotorcraft.
[0002] Rotorcraft have the advantage of being able to operate under such flight conditions in any environment, even one that has not been previously prepared or even surveyed.
[0003] However, in this context, the problem of a "Brown-out / White-out" situation arises when the rotorcraft moves close to the ground, as an indication at a distance of less than 15 m from the ground. Close to the ground, the rotor or rotors equipping the rotorcraft raise clouds of particles that cause a loss of visibility for the pilot of the rotorcraft. It therefore appears useful to assist the pilot in navigation in the event of loss of visibility. To provide such navigation assistance, it is known, in a "Brown-out / White-out" situation, to provide the pilot with a display of an artificial dynamic representation of the environment outside the rotorcraft, hereinafter referred to as the "outside world". For this purpose, the representation of the displayed outside world is constructed by a navigation assistance system from images captured in flight prior to the "Brown-out / White-out" situation.
[0004] In this connection, reference can be made to EP1650534 (EADS Deutschland), US7642929 (US AIR FORCE) and US8019490 (APPLIED MINDS), which disclose methods for implementing such navigation assistance systems in a "Brown-out / White-out" situation.
[0005] Known solutions implement at least one set of cameras facing forward of the rotorcraft and jointly pointed towards the same forward line of sight. Alternatively, the rotorcraft may be equipped with at least one set of side cameras pointed towards the same lateral line of sight.
[0006] The notions of "frontal", "lateral", "rear", "right" and "left" are commonly identified with respect to the direction of progression in forward translation of the rotorcraft. The cameras of the same set individually provide images of the rotorcraft's external environment, ultimately supplying stereoscopic images to the navigation assistance system and thereby generating terrain elevation data. The images, hereinafter referred to as "captured images", are sequentially captured by the cameras at a given frequency as digital pixel data. Image processing means, such as texturing means for example, are optionally used to computationally process the captured images in order to improve the visual quality of the outside-world display derived from the images captured by the cameras. In the absence of a "Brown-out / White-out" situation and at the request of the pilot, the images currently captured by the cameras may be displayed to provide the pilot with a dynamic display of a representation of the outside world, referred to in this specific case as the "current outside world". Moreover, the on-board instrumentation of the rotorcraft, including an inertial unit, classically provides navigation data relating at least to the current state vector of the rotorcraft. The state vector of the rotorcraft is in particular representative of the relative or absolute position, the speed of the rotorcraft and more particularly the ground speed, and the orientation and/or change of attitude of the rotorcraft as it progresses. The navigation data of the rotorcraft are used by the navigation assistance system to construct the dynamic representation of the outside world displayed in a "Brown-out / White-out" situation. The current navigation data of the rotorcraft are integrated into the images as they are captured, generating metadata that are used by the navigation assistance system to construct and display a representation of the outside world according to the variation of the current state vector of the rotorcraft, referred to in this specific case as the "reconstituted world". For this purpose, simultaneous mapping and localization means generate integrated location maps by calculation, implementing known calculation processes such as, for example, those commonly referred to as SLAM (Simultaneous Localization And Mapping) or CML (Concurrent Mapping and Localization). Simultaneous mapping and localization calculation processes are based on an incremental construction of the integrated location maps using predictive computing algorithms that typically implement Kalman filters. The integrated location maps are constructed from the metadata, which combine the captured images with the navigation data identified sequentially by the on-board instrumentation simultaneously with the sequential capture of the images by the cameras. The images captured individually at the same instant by the cameras of the same set are used to provide the navigation assistance system with the metadata relating to the territories captured at a given moment by that set of cameras, hereinafter designated "captured territories". The integrated location maps are thus successively generated in flight by a simultaneous mapping and localization calculation process and are stored at a given time frequency in an integrated location map database.
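To make the incremental construction concrete, the following minimal sketch in Python shows how each stereo capture and the navigation data recorded at the same instant could be fused into one integrated location map entry. It is only an illustration of the principle: the constant-velocity Kalman predict/update step, the class names and the data layout are assumptions, not the implementation disclosed by the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MapEntry:
    """One integrated location map entry: terrain points tagged with the
    navigation state estimated at the capture instant."""
    timestamp: float            # dating data assigned at capture
    camera_set: str             # origin data: which set of cameras
    nav_state: np.ndarray       # estimated position at the capture instant
    terrain_points: np.ndarray  # 3-D points triangulated from the stereo pair

class IncrementalMapper:
    """Kalman-filter-flavoured incremental map builder (illustrative only)."""

    def __init__(self, pos, vel, p=1.0, q=0.01, r=0.25):
        self.x = np.hstack([pos, vel])   # state: 3-D position + velocity
        self.P = np.eye(6) * p           # state covariance
        self.q, self.r = q, r            # process / measurement noise scales
        self.database = []               # plays the role of the map database

    def step(self, dt, measured_pos, timestamp, camera_set, stereo_points):
        # Predict with a constant-velocity motion model.
        F = np.eye(6)
        F[:3, 3:] = np.eye(3) * dt
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + np.eye(6) * self.q
        # Update with the position supplied by the inertial unit.
        H = np.hstack([np.eye(3), np.zeros((3, 3))])
        S = H @ self.P @ H.T + np.eye(3) * self.r
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(measured_pos) - H @ self.x)
        self.P = (np.eye(6) - K @ H) @ self.P
        # Store a new integrated location map entry at the capture frequency.
        self.database.append(MapEntry(timestamp, camera_set,
                                      self.x[:3].copy(), stereo_points))
```

Each stored entry carries the navigation state estimated at the capture instant, which is precisely what allows maps to be extracted later by comparison with the current state vector of the rotorcraft.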
The integrated location map database is exploited by a data processing unit to generate the construction and dynamic display of the reconstituted world in flight. For this purpose, the processing unit computationally compares the current navigation data of the rotorcraft with the navigation data integrated into the various integrated location maps stored in the database in order to generate the display of the reconstituted world. More particularly, the processing unit identifies and extracts the integrated location maps according to the variation of the current navigation data of the rotorcraft, so as to construct and display the reconstituted world with a display dynamic whose evolution varies according to the current navigation data of the rotorcraft. For example, reference may be made to document EP2133662 (HONEYWELL INT INC), which discloses such methods of constructing and displaying a reconstituted world conforming to the external environment of a host platform by a navigation assistance system implementing a simultaneous mapping and localization calculation process. In this context, the reconstituted world may be displayed to provide the pilot with navigation assistance in the event of a sudden loss of visibility, such as in the presence of fog sheets or of clouds of particles surrounding the rotorcraft, especially in a "Brown-out / White-out" situation. For this purpose, the navigation assistance system incorporates means for detecting a "Brown-out / White-out" situation, whose detection triggers the implementation of the processing unit and the display of the reconstituted world to compensate for the pilot's loss of visibility of the rotorcraft's external environment.
[0007] A "Brown-out / White-out" situation is for example detected:
-) by locating the presence of a cloud of particles from an analysis of the pixels defining the captured images, more particularly of their density. The analysis of the pixels defining the captured images must be carried out according to modalities making it possible to identify the "Brown-out / White-out" situation as quickly as possible, while being sufficiently reliable to prohibit or authorize the display of the reconstituted world when needed;
-) by a calculated comparison between the rotorcraft's current ground height and a ground-height threshold below which a "Brown-out / White-out" situation classically arises, such as a separation distance between the rotorcraft and the ground of less than 15 m. Such a solution is usually preferred because it makes it possible to identify a "Brown-out / White-out" situation quickly with modest computing capacities;
-) more simply, by activation of a display control member by a pilot confronted with a "Brown-out / White-out" situation, or wishing, in conditions of good visibility, to have the reconstituted world displayed in augmented reality by superimposing virtual images on the actual view.
In this context, the navigation assistance provided by the construction and display of the reconstituted world requires considerable computing power and memory. Such computing and memory power must be available on board the rotorcraft to enable the construction and display of the reconstituted world as quickly as possible in accordance with the variation of the current navigation data of the rotorcraft, with reliability, visual quality and a satisfactory evolution of the display dynamic.
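The detection modalities listed above can be combined in a simple gating function, any one modality sufficing to declare the loss of visibility. The sketch below is a minimal illustration under assumed names; only the 15 m figure comes from the description, the variance direction follows the control operation detailed further on.

```python
from typing import Optional

GROUND_HEIGHT_THRESHOLD_M = 15.0   # indicative value cited in the description

def loss_of_visibility(pilot_command: bool,
                       ground_height_m: float,
                       pixel_variance: Optional[float] = None,
                       variance_threshold: Optional[float] = None) -> bool:
    """Return True when a Brown-out / White-out situation should be assumed.

    Any single modality suffices: the pilot's display control member, the
    ground-height comparison, or a pixel-intensity analysis of the images.
    """
    if pilot_command:                                  # third modality
        return True
    if ground_height_m < GROUND_HEIGHT_THRESHOLD_M:    # second modality
        return True
    if pixel_variance is not None and variance_threshold is not None:
        return pixel_variance <= variance_threshold    # first modality: a
    return False                                       # uniform cloud flattens
                                                       # the pixel variance
```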
[0008] More particularly, the calculation operations carried out by the navigation assistance system require significant computation and memory capacities to provide optimized computation frequencies for:
-) the capture and processing at a sustained rate of the images captured by the cameras, to provide a satisfactory visual quality of the captured images and to generate the metadata from which the integrated location maps are developed;
-) obtaining integrated location maps that are accurate, as numerous as possible and rapidly available, in order to provide the processing unit with sufficient resources to construct and display the reconstituted world, and finally to provide the pilot with reliable and comfortable navigation assistance;
-) a display of the reconstituted world with refreshing of the displayed images at sustained rates, to obtain an evolution of the dynamic display of the reconstituted world giving said display a fluidity that is comfortable and safe for the pilot.
In addition, the reliability, relevance and fluidity of the evolution of the display dynamic of the reconstituted world depend on the immediate and timely availability of all the information to be correlated during the various operations leading to the construction and display of the reconstituted world. The result is a constant search for a high-performance navigation assistance system providing a reliable and relevant display of the reconstituted world, with optimized quality and evolution of the dynamics of this display, while being subject to the constraint of the limited computing power and memory capacity needed to obtain such a display quickly.
[0009] In the field of computational image processing, processing units of the FPGA (Field Programmable Gate Array) type, for example, are conventionally used. FPGA processing units, or similar processing units, allow the logic circuits of the calculation means implemented by the processing unit to be reconfigured. According to the choices made by the programmers of the operating modes of the processing unit, the calculation logic circuits are configured to optimize the exchanges between them and the processing of the data. In the field of navigation assistance systems providing a dynamic display of the outside world, a common solution for optimizing the exploitation of the calculation and memory capabilities on board a rotorcraft is to restrict the calculation operations to be performed in flight by exploiting an integrated location map database developed on the ground prior to the rotorcraft's flight mission. Such a database is constructed from images captured by any rotorcraft during previous flight missions. Such a solution has the advantage of reducing the calculation operations to be performed in flight and consequently makes it possible to reserve the calculation and memory capabilities of the navigation assistance system on board the rotorcraft for the construction and display of the reconstituted world. However, such a solution is not satisfactory because of the loss of relevance of the information provided by an integrated location map database developed prior to the rotorcraft's flight mission. Furthermore, the operation of an on-board navigation assistance system providing, if necessary, a dynamic display of the representation of the outside world deserves to be optimized. It is for example advantageous to exploit on the ground the information stored and/or generated during a flight mission by the navigation assistance system, in order to analyze a posteriori said flight mission and/or the territories overflown by the rotorcraft. Beyond such potential exploitations, it also seems appropriate to look for and implement other possible uses of the navigation assistance system. Such research must of course be conducted taking into account the constraints related to the search for a powerful navigation assistance system with limited computing power and memory, nevertheless able to provide in time the information necessary for the construction and display of the reconstituted world within the constraints mentioned above. In this context, the object of the present invention is to propose a method of implementing a navigation assistance system by displaying a representation of the outside world in the event of loss of visibility, such as is generated in particular by a "Brown-out / White-out" situation. On the basis of the observations just made, to which the approach of the present invention belongs, it is more particularly sought to optimize the in-flight operation of the navigation assistance system taking into account a satisfactory compromise between the various constraints mentioned above. The method of the present invention is a method of implementing a rotorcraft navigation assistance system by construction and display in flight of a dynamic representation of the outside world on a screen. The method of the present invention comprises the following operations performed in flight:
-) at the pilot's request, sequentially capture frontal images relating to the rotorcraft's external environment.
Such a request from the pilot is for example effected by activation of an image capture command button dedicated to this request.
[0010] The frontal images are captured by at least one set of front cameras on board the rotorcraft oriented towards a common front line of sight, classically providing stereoscopic images to a processing unit of the navigation assistance system.
-) Sequentially compute integrated location front maps from the frontal images and from navigation data of the rotorcraft. Conventionally, such navigation data comprise at least the rotorcraft state vector and are typically provided by the rotorcraft's on-board instrumentation simultaneously with the sequential capture of the frontal images. Said integrated location front maps are then stored in a first database.
-) Identify a situation of loss of visibility, which triggers the construction by calculation and the dynamic display on the screen of a first dynamic representation of the outside world, the previously defined "reconstituted world". The construction and display of the reconstituted world are operated from a sequential extraction of the integrated location front maps stored in the first database, according to the variation of the current navigation data of the rotorcraft compared with the navigation data integrated into the integrated location front maps.
According to the present invention, such a method is mainly remarkable in that it comprises the following operations:
-) Simultaneously with the capture of the frontal images, sequentially capture complementary images relating to the outside world by complementary sets of cameras. The stereoscopic images provided individually by the cameras of the different sets of cameras (2,3,4,5) equipping the rotorcraft are hereafter designated "captured images". The complementary sets of cameras comprise at least one set of right side cameras, at least one set of left side cameras and at least one set of rear cameras. The cameras of the same complementary set of cameras are of course oriented towards a common line of sight to conventionally provide stereoscopic images of the territories captured at a given moment by a given set of cameras. More particularly, it is understood that the cameras of the right side camera set are oriented towards a common right side line of sight, the cameras of the left side camera set are oriented towards a common left side line of sight, and the cameras of the rear camera set are oriented towards a common rear line of sight.
-) Allocate by calculation to each of the captured images dating data provided by the instrumentation and origin data relating to the identification of the set of cameras (2,3,4,5) from which the captured images are derived. Such an assignment by calculation of dating data and origin data to the captured images generates metadata. Such metadata are generated for each of the captured images, including frontal metadata specific to the front images and complementary metadata specific to the complementary images respectively provided by a given set of complementary cameras.
-) Store the complementary metadata in a second database and sequentially develop by calculation said integrated location front maps.
-) On request of the pilot, construct by calculation and display on the screen a second dynamic representation of the outside world, called the "extrapolated world". Such a request from the pilot is for example made by activation of a command button, said display of the extrapolated world, dedicated to this request.
The extrapolated world is composed of the integrated location front maps sequentially extracted from the first database according to the variation of the current navigation data of the rotorcraft compared with the navigation data integrated into the integrated location front maps. Complementary terrain elevation data derived from the complementary metadata are added to said integrated location front maps extracted from the first database during the construction and dynamic display of said extrapolated world. Said addition is made by correlating the dating data with the origin data respectively assigned to the front images and to the complementary images. These arrangements are such that said correlation identifies, according to the variation of the current navigation data of the rotorcraft, the relative positions with respect to one another of the integrated location front maps and of the additional terrain elevation data derived from the captured images provided at the same moment by the different camera sets. The terrain elevation supplemental data are potentially constituted, at least in part, by the complementary metadata, which are selectively extracted from the second database and integrated with the integrated location front maps.
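The dating and origin data thus act as a join key between the frontal and complementary image streams. A minimal sketch of such a metadata record follows; the class names, the Enum values and the tuple layout are illustrative assumptions, only the reference numerals come from the description.

```python
from dataclasses import dataclass
from enum import Enum

class CameraSet(Enum):
    FRONT = 2     # reference numbers used in the description
    REAR = 3
    LEFT = 4
    RIGHT = 5

@dataclass(frozen=True)
class CapturedImageMetadata:
    timestamp: float        # dating data (13) from the on-board instrumentation
    camera_set: CameraSet   # origin data (14): which set of cameras
    nav_data: tuple         # state vector recorded at the capture instant
    stereo_pair_id: str     # handle to the stored stereo image data

def correlation_key(meta: CapturedImageMetadata) -> float:
    """Frontal and complementary images captured at the same instant share
    the same dating data; the origin data then fixes their relative layout."""
    return meta.timestamp
```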
[0011] The complementary terrain elevation data are also potentially constituted, at least in part, by complementary integrated location maps built beforehand and stored in a third database.
[0012] The complementary integrated location maps are constructed from the complementary metadata, which integrate the rotorcraft navigation data provided by the on-board instrumentation simultaneously with the sequential capture of the complementary images.
[0013] The prior construction of the complementary integrated location maps is in particular carried out subject to a calculation capacity of the navigation assistance system, said calculation capacity being evaluated as available with respect to a predetermined calculation capacity threshold.
[0014] According to a preferred embodiment, the extrapolated world is potentially constructed primarily from the complementary integrated location maps when they are available, and otherwise from the complementary images. More particularly, following the pilot's request for the construction and dynamic display of the extrapolated world, the method further comprises the following operations:
-) check by calculation, as the display of the extrapolated world dynamically evolves, the availability of complementary integrated location maps able to be correlated with the integrated location front maps to construct and display the extrapolated world, then
-) in the event of such availability of complementary integrated location maps, construct and display the extrapolated world, otherwise:
-) develop the complementary integrated location maps necessary for the construction and display of the extrapolated world, subject to the calculation capacity of the navigation assistance system evaluated as available with respect to the calculation capacity threshold, otherwise:
-) extract by calculation from the second database the complementary metadata needed to construct and display the extrapolated world.
According to a preferred embodiment, the pilot has the choice either to wait for the calculation capacities to become available to build the complementary integrated location maps necessary for the construction and dynamic display of the extrapolated world, or, in urgent cases, to exploit the complementary metadata directly to construct and display the extrapolated world. Such a choice of immediate exploitation of the complementary metadata by the pilot is notably made by activation of a control member, said stopping of the development of complementary integrated location maps, dedicated to this function. More particularly according to a preferred embodiment, the operation of extracting by calculation from the second database the complementary metadata necessary for the construction and dynamic display of the extrapolated world is placed under the dependence of a request of the pilot. Furthermore, it is preferably proposed to sort the images captured by the different camera sets so as to exclude images analyzed as irrelevant. More particularly, during the operation of sequentially capturing the captured images, the method comprises a step of selecting by calculation the captured images by applying at least one predefined relevance criterion relating to a territorial continuity between the different territories respectively captured at the same instant by the different sets of cameras equipping the rotorcraft. The captured images deduced to be irrelevant with respect to said relevance criterion are removed prior to their processing by the navigation assistance system; a sketch of such a filter is given after this paragraph. Said at least one relevance criterion is preferably predefined according to at least one of the following calculation parameters:
-) a first threshold of separation between the rotorcraft and a captured territory which is captured at a given moment by any set of cameras, and
-) a second threshold of distance between the different captured territories respectively captured at the same instant by the different sets of cameras equipping the rotorcraft.
The information relating to the separation distances between the rotorcraft and any said captured territory is conventionally provided by the camera sets generating the stereoscopic images.
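The announced relevance filter can be expressed as a simple predicate over the two distance parameters; the threshold values below are assumptions chosen for illustration, the patent does not give numbers.

```python
def is_relevant(rotorcraft_to_territory_m: float,
                inter_territory_gap_m: float,
                max_range_m: float = 500.0,    # first threshold (assumed value)
                max_gap_m: float = 50.0        # second threshold (assumed value)
                ) -> bool:
    """Apply the two distance-based relevance criteria to one captured image.

    Images of territories too far from the rotorcraft, or too far from the
    territories captured at the same instant by the other camera sets, are
    discarded before any further processing.
    """
    return (rotorcraft_to_territory_m <= max_range_m
            and inter_territory_gap_m <= max_gap_m)
```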
According to an advantageous embodiment, the method further comprises modalities for identifying territories of interest making it possible to establish and display a flight plan between a current position of the rotorcraft and at least one said territory of interest. For this purpose, during the sequential capture operation of the captured images, the method further comprises a step of identifying at least one captured territory, said territory of interest. The identification step is performed by applying at least one predefined criterion of interest relating to the ability of said territory of interest to be exploited by the rotorcraft. At the end of said identification step, a data item of interest is selectively integrated into the metadata from which the captured images relating to said territory of interest are derived. As a result, and according to a preferred embodiment of the invention, the method comprises, on request of the pilot, in particular operated by activation by the pilot of a control member said development of a flight plan dedicated to this function, an operation of drawing up a flight plan between the current position of the rotorcraft and said at least one territory of interest. Said flight plan is potentially displayed superimposed on the representation of the outside world displayed by the screen. Such a superimposition is potentially displayed on the reconstituted world and/or on the extrapolated world. According to one embodiment, said identification step can be performed by the pilot of the rotorcraft. The data item of interest is integrated into the metadata on request of the pilot, in particular by activation of a control member, called marking of the captured images, dedicated to this function. According to another embodiment, taken alone or in combination with the previous one, said identification step is potentially an operation of algorithmic processing of the image pixels of the pattern recognition type. In this case, the integration of the data item of interest into the metadata is performed by calculation according to a recognition of at least one predefined form on the captured images following said algorithmic pixel processing operation. Said recognition of the predefined form is for example conventionally carried out by implementing a process of algorithmic processing of the image pixels of the pattern recognition type, by comparison between the captured images and such predefined forms listed in a pre-established database. According to various exemplary embodiments considered in isolation or in combination, said at least one criterion of interest is at least one of the following criteria of interest, relating to the ability of a territory of interest to constitute for the rotorcraft: a landing zone, a refuge zone, a danger zone and/or an intervention zone. Moreover, the identification of said situation of loss of visibility is potentially carried out according to at least one of the following operations:
-) an operation of analysis by calculation of the pixels defining the captured images,
-) an operation of comparison by calculation between the current ground height of the rotorcraft and a ground-height threshold identified as generating a situation of loss of visibility (Brown-out / White-out), and/or
-) at the request of the pilot confronted with a situation of loss of visibility, such as by activation of a display control member.
In the context of an analysis by calculation of the pixels defining the captured images, it is proposed more particularly to analyze the intensity variance of the pixels of the captured images to identify said situation of loss of visibility. More particularly, and according to an advantageous form of implementation of the operation of analysis by calculation of the pixels defining the captured images, the method comprises a control operation of the reconstituted world display implementing the following steps:
-) sequentially sample by calculation, at a given frequency, a predefined number of frontal images, called sampled frontal images,
-) calculate a sliding range of a predefined number of said sampled frontal images,
-) calculate a first pixel intensity variance of a median sampled frontal image of said range and compare by calculation said first calculated variance with a first pixel intensity variance threshold, then
-) in the case where said first calculated variance is greater than said first variance threshold, repeat the sampling step of the frontal images; otherwise authorize the construction and dynamic display of the reconstituted world.
As a result, the method comprises a control operation (45) of the extrapolated world display implementing the following steps:
-) in the case of an authorization to construct the reconstituted world resulting from the control of the reconstituted world display, calculate second pixel intensity variances of the captured images respectively captured at a given instant by the different camera sets and considered two by two,
-) compare by calculation said second variances with a second pixel intensity variance threshold, then in the case where at least one of said second variances is greater than the second variance threshold, repeat the control operation of the reconstituted world display, notably from the sampling step of the frontal images; otherwise authorize the construction and display of the extrapolated world, then
-) compare said second variances with a third pixel intensity variance threshold of a value greater than said second variance threshold, then in the case where said second variances are greater than the third variance threshold, prohibit the construction and display of the extrapolated world while maintaining the execution of the control operation of the reconstituted world display; otherwise maintain the construction and dynamic display of the extrapolated world.
Exemplary embodiments of the present invention will be described in relation to the figures of the attached plates, in which:
FIG. 1 is composed of two associated diagrams (a) and (b), diagram (a) illustrating a rotorcraft equipped with a navigation assistance system of the present invention and diagram (b) illustrating the modes of image acquisition and dynamic display of different representations of the outside world by implementation of said navigation assistance system;
FIG. 2 is composed of two associated diagrams (c) and (d), diagram (c) illustrating modes of image acquisition along a route taken by the rotorcraft represented in diagram (a) of FIG. 1, and diagram (d) illustrating the processes of such an image acquisition and of the construction and dynamic display of the different representations of the outside world illustrated in diagram (b);
FIG. 3 is a logical representation of the modalities of analysis by calculation of the pixels of the images acquired by the navigation assistance system equipping the rotorcraft illustrated in FIG. 1.
The common elements and common operations shown in the different figures are respectively designated by common names identified with the same reference numbers and/or letters. In FIG. 1, a rotorcraft 1 is equipped with a system for assisting navigation by in-flight display of different dynamic representations of the environment outside the rotorcraft, said outside world Me. For this purpose, the rotorcraft 1 is equipped with different sets of cameras 2,3,4,5 to acquire images 6,7,8,9 captured by the cameras and relating to the different territories 10 successively overflown by the rotorcraft 1 during a trip. The captured images 6,7,8,9 are stereoscopic images obtained from the images provided individually by the cameras of a given set of cameras, from which stereoscopic images the different representations of the outside world Me are constructed and displayed in flight. In particular, the rotorcraft 1 is equipped with:
-) at least one set of front cameras 2 facing forward of the rotorcraft 1 along the direction S of progression in forward translation of the rotorcraft 1. The cameras of said at least one set of front cameras 2 are oriented towards a common front line of sight for capturing stereoscopic front images 6;
-) at least one set of rear cameras 3 facing the rear of the rotorcraft 1 along the direction S of progression in forward translation of the rotorcraft 1. The cameras of said at least one set of rear cameras 3 are oriented towards a common rear line of sight for capturing stereoscopic rear images 7;
-) at least one set of right side cameras 5 oriented laterally to the right of the rotorcraft 1 along the direction S of progression in forward translation of the rotorcraft 1. The cameras of said at least one set of right side cameras 5 are oriented towards a common right lateral line of sight for capturing stereoscopic right lateral images 9;
-) at least one set of left side cameras 4 oriented laterally to the left of the rotorcraft 1 along the direction S of progression in forward translation of the rotorcraft 1. The cameras of said at least one set of left side cameras 4 are oriented towards a common left lateral line of sight for capturing stereoscopic left lateral images 8.
The notions of "stereoscopic right lateral images" 9, "stereoscopic left lateral images" 8 and "stereoscopic rear images" 7 are subsequently grouped under the global notion of "complementary images" 7,8,9. Furthermore, all the stereoscopic images provided by the different camera sets, including the front images 6 and the complementary images 7,8,9, are referred to as "captured images". The rotorcraft 1 is also conventionally equipped with an instrumentation 11 commonly generating navigation data 12 (12-1; 12-2; 12-3) comprising at least the current state vector of the rotorcraft 1, representative of its relative or absolute position, orientation and/or attitude change as it progresses. The on-board instrumentation 11 of the rotorcraft 1 also provides dating data 13 and origin data 14 relating to the identification of the set of cameras 2,3,4,5 from which the captured images 6,7,8,9 are derived. Such origin data 14 make it possible to relate the different captured images 6,7,8,9 to the camera sets 2,3,4,5 from which they are respectively derived, and can consequently be used to deduce by calculation
the relative positions of the different captured images 6,7,8,9 with respect to one another. On request 61 of the pilot, the cameras of the different sets of cameras 2,3,4,5 are implemented to provide the captured images 6,7,8,9. The image acquisition unit 19 processes the captured images 6,7,8,9, the navigation data 12, the dating data 13 and the origin data 14 to generate metadata 15,16,17,18 exploited by a processing unit 23 to construct various representations of the outside world Me, including a reconstituted world Mr and/or an extrapolated world Mex. The different metadata include frontal metadata 15, rear metadata 16, right lateral metadata 17 and left lateral metadata 18. The rear metadata 16 specific to the rear images, the right lateral metadata 17 specific to the right lateral images and the left lateral metadata 18 specific to the left lateral images are grouped under the notion of "complementary metadata" 16,17,18. The reconstituted world Mr and/or the extrapolated world Mex are displayed on a screen 20 by means of a display unit 21 exploiting for this purpose the information provided by the processing unit 23. On request 22 of the pilot, a current outside world Mc is potentially displayed on the screen 20. The current outside world Mc is displayed through the display unit 21, potentially either from the front images 6 alone or from the set of images 6,7,8,9 captured by the different camera sets 2,3,4,5.
[0015] In the case of a display of the current outside world Mc from the images 6,7,8,9 captured by the different camera sets 2,3,4,5, the various captured images 6,7,8,9 are potentially displayed in separate display windows positioned on the screen 20 relative to one another according to the relative positions on board the rotorcraft 1 of the different sets of cameras 2,3,4,5. In FIG. 2, an example of a path T made by the rotorcraft 1 is shown in diagram (c). As the rotorcraft 1 progresses, the different sets of cameras 2,3,4,5 sequentially provide, at a given frequency, the captured images 6,7,8,9 to the image acquisition unit 19. During the capture of the captured images 6,7,8,9, a selection step 24 of the captured images 6,7,8,9 is performed by calculation to validate their exploitation, avoiding cluttering the memory means and/or saturating the calculation capabilities of the navigation assistance system with images considered unusable with respect to a predefined relevance criterion. Such a relevance criterion 25 relates in particular to a territorial discontinuity D between the different captured territories 10 which are captured at the same instant by the different sets of cameras 2,3,4,5. At the end of this selection step 24, the captured images 6,7,8,9 deduced by calculation to be irrelevant with respect to the relevance criterion 25 are deleted. Furthermore, during the acquisition of the captured images 6,7,8,9, an identification step 26 of a territory of interest 27 is carried out by the pilot of the rotorcraft and/or from an analysis by calculation of the captured images 6,7,8,9. Such a territory of interest 27 is evaluated by application of at least one predefined criterion of interest 28 relating to an exploitation of the territory of interest 27 by the rotorcraft 1, such as the ability of the territory of interest 27 to constitute a landing zone, a refuge zone, a danger zone or even an intervention zone. In the case of identification of a territory of interest 27, a data item of interest 29 is integrated into the metadata 15,16,17,18. The integration of the data item of interest 29 is potentially performed manually on request of the pilot and/or is automatically generated by algorithmic processing means 31 of the captured images 6,7,8,9 of the pattern recognition type. The identification of the territories of interest 27 can be exploited later to develop a flight plan 32 on request 33 of the pilot. Said flight plan 32 is developed from the current position of the rotorcraft 1 to at least one previously identified territory of interest 27. To assist the pilot in the development of the flight plan 32, the territories of interest 27 can be referenced according to their criterion of interest in a list of territories of interest 27, potentially generated and displayed on an auxiliary screen 20' during the identification step 26. A texturing step 34 of the captured images 6,7,8,9 can be performed to optimize the visual quality and viewing comfort of the outside world displayed on the screen 20. It will be noted at this stage of the description that such a texturing step 34 is preferably performed by the image acquisition unit 19, but can also be performed by the processing unit 23 when preparing the integrated location maps derived from the captured images 6,7,8,9, or even by the display unit 21 when displaying the representations of the outside world Me.
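As an illustration of the marking mechanism, the following sketch attaches a data item of interest to one image's metadata, either from the pilot's marking control member or from the output of a pattern-recognition pass. The names, the dictionary-based metadata and the (shape, criterion) pairing are assumptions; the recognition pass itself is not modelled.

```python
INTEREST_CRITERIA = ("landing_zone", "refuge_zone",
                     "danger_zone", "intervention_zone")

def tag_territory_of_interest(metadata: dict,
                              recognized_shapes=(),
                              pilot_mark=None):
    """Attach a data item of interest (29) to one image's metadata.

    `pilot_mark` stands for the dedicated marking control member;
    `recognized_shapes` stands for the output of a pattern-recognition
    pass, as (shape, criterion) pairs, e.g. ("flat_clearing", "landing_zone").
    Returns the criterion retained, or None for an ordinary territory.
    """
    if pilot_mark in INTEREST_CRITERIA:          # manual marking by the pilot
        metadata["interest"] = pilot_mark
        return pilot_mark
    for _shape, criterion in recognized_shapes:  # automatic marking
        if criterion in INTEREST_CRITERIA:
            metadata["interest"] = criterion
            return criterion
    return None
```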
The metadata 15,16,17,18 are transmitted to the processing unit 23 for the construction of the reconstituted world Mr and/or of the extrapolated world Mex.
[0016] Integrated location front maps 35 are developed with priority from the frontal metadata 15 as the front images 6 are captured, and are successively stored in a first database 36.
[0017] The complementary metadata 16,17,18 are stored on standby in a second database 37 in order to optimize the availability of the calculation capabilities of the navigation assistance system for developing the integrated location front maps 35 as quickly as possible. In the case of availability of the calculation capabilities 50 of the navigation assistance system with respect to a calculation capacity threshold 51 taking into account the priority constraints of preparing the integrated location front maps 35, complementary integrated location maps 38 are potentially derived from the complementary metadata 16,17,18. The complementary integrated location maps 38 are stored in a third database 39 as and when they are developed, depending on the availability of the calculation capabilities of the navigation assistance system. The process of developing the complementary integrated location maps 38 is liable to be interrupted temporarily depending on the immediate availability of the calculation capabilities of the navigation assistance system. At this stage of preparing the complementary integrated location maps 38, it is preferred to exploit the complementary metadata 16,17,18 following a path running from the most recently captured complementary images 7,8,9 towards the oldest captured ones. In the case of identification of a sudden loss of visibility of the external environment of the rotorcraft 1 by the pilot, such as in a situation of fog sheets or in the event of formation of particle clouds as in a "Brown-out / White-out" situation, the processing unit 23 is able to generate a display of the reconstituted world Mr constructed from the previously developed integrated location front maps 35.
[0018] A situation of loss of visibility P is potentially identified on request 40 of the pilot confronted with such a situation, or by a calculation unit 41 implementing an analysis operation 42 of the pixel intensity variance of the captured images 6,7,8,9. The loss of visibility P is also potentially identified by the calculation unit 41 making a comparison 43 between the current ground height of the rotorcraft 1, provided by the on-board instrumentation 11, and a predefined ground-height threshold. The analysis operation 42 of the pixel intensity variance of the captured images 6,7,8,9 more particularly comprises a first control operation 44 of the reconstituted world display, which is followed, upon a request for display of the extrapolated world Mex, by a second control operation 45 of the extrapolated world display. In the case of identification of a loss of visibility P, a construction operation 46 of the reconstituted world Mr is implemented by the processing unit 23, the reconstituted world Mr then being displayed on the screen 20 via the display unit 21. For this purpose, integrated location front maps 35 are extracted from the first database 36 according to the current navigation data 12-2 of the rotorcraft 1 provided by the on-board instrumentation 11 and compared with the different navigation data 12-1 respectively integrated into the frontal metadata 15 from which the integrated location front maps 35 are derived. Moreover, on request 47 of the pilot, a construction operation 48 of the extrapolated world Mex is implemented by the processing unit 23, the extrapolated world Mex then being displayed on the screen 20 via the display unit 21. The extrapolated world Mex is constructed from the integrated location front maps 35 according to the current navigation data 12-3 of the rotorcraft 1, following the construction modalities of the reconstituted world Mr, the integrated location front maps 35 being associated with the additional terrain elevation data 49 derived from the complementary metadata 16,17,18. The additional terrain elevation data 49 are associated with the integrated location front maps 35 by correlating the dating data 13 with the origin data 14 respectively integrated into the frontal metadata 15 and the complementary metadata 16,17,18. More particularly, as previously stated, the variation of the current navigation data 12-3 of the rotorcraft 1 is exploited to extract the integrated location front maps 35. According to the origin data 14 and the dating data 13, the complementary metadata 16,17,18 derived from the complementary images 7,8,9 captured simultaneously with the front images 6 from which the integrated location front maps 35 are taken are extracted from the second database 37 to constitute the additional terrain elevation data 49. The extrapolated world Mex is then displayed by juxtaposing the integrated location front maps 35 with the additional terrain elevation data 49 associated with them, according to the origin data identifying their positions relative to one another in accordance with the relative positions of the different camera sets 2,3,4,5 on board the rotorcraft 1. The additional terrain elevation data 49 are potentially constituted of complementary integrated location maps 38 previously prepared and extracted from the third database 39 and/or of complementary metadata 16,17,18.
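The juxtaposition just described amounts to a keyed lookup: the current navigation data select a front map, and that map's dating data select the matching supplements. A minimal dictionary-based sketch follows; the data structures and names are assumptions, only the reference numerals come from the description.

```python
def build_extrapolated_frame(current_pos, first_db, supplements_db):
    """Assemble one frame of the extrapolated world Mex (illustrative only).

    first_db:       {timestamp: (nav_position, front_map_points)}
    supplements_db: {(timestamp, side): elevation_points}, where `side` is
                    "rear", "left" or "right" (the origin data 14).
    """
    # 1. Select the front map whose stored navigation data best match the
    #    current navigation data 12-3 of the rotorcraft.
    def mismatch(item):
        nav_position, _ = item[1]
        return sum((a - b) ** 2 for a, b in zip(nav_position, current_pos))
    timestamp, (_, front_points) = min(first_db.items(), key=mismatch)

    # 2. Gather the terrain elevation supplements 49 captured at the same
    #    instant; the dating data 13 acts as the join key, the origin data
    #    fixes where each supplement sits relative to the front map.
    frame = {"front": front_points}
    for side in ("rear", "left", "right"):
        frame[side] = supplements_db.get((timestamp, side))
    return frame
```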
More particularly, in the absence of available complementary integrated location maps 38, the missing complementary integrated location maps 38 are prepared subject to the availability of the computing capabilities 50 of the processing unit 23 with respect to the calculation capacity threshold 51, or the complementary metadata 16,17,18 can be directly exploited to be added to the integrated location front maps 35 from which the extrapolated world Mex is constructed. An absence of complementary integrated location maps 38 may also result from a request 52 from the pilot to rapidly obtain the extrapolated world Mex without waiting for the construction of all the necessary complementary integrated location maps 38, and/or to accelerate the evolution of the display dynamics of the extrapolated world Mex depending on the availability of the computing capabilities 50 of the navigation assistance system. According to an advantageous embodiment, temporal information can be displayed to provide the pilot with a decision aid with respect to the execution of said request 52 to rapidly obtain the extrapolated world Mex. Such temporal information advantageously relates to the computation time required to obtain the complementary integrated location maps 38. In FIG. 3, the execution of the operations of constructing and displaying the reconstituted world Mr and/or the extrapolated world Mex is subjected to said operation of analysis by the processing unit of the pixel intensity variance of the captured images 6,7,8,9.
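The precedence just described — pre-built map first, then deferred construction gated by the available capacity, then the raw metadata, with the pilot's request 52 short-circuiting the build — can be summarized in a small decision function. All names below, and the stand-in builder, are illustrative assumptions.

```python
def build_complementary_map(metadata):
    """Stand-in for the computation of a complementary integrated location
    map 38 from raw complementary metadata 16,17,18 (not modelled here)."""
    return {"built_from": metadata}

def obtain_supplement(key, third_db, second_db,
                      available_capacity, capacity_threshold,
                      pilot_stop_request=False):
    """Choose the source of one terrain elevation supplement 49.

    Precedence: pre-built complementary map when available; otherwise build
    one, but only if the calculation capacity 50 exceeds the threshold 51
    and the pilot has not issued the request 52 for immediate display; as a
    last resort, exploit the raw complementary metadata directly.
    """
    if key in third_db:
        return third_db[key]
    if available_capacity > capacity_threshold and not pilot_stop_request:
        third_db[key] = build_complementary_map(second_db[key])
        return third_db[key]
    return second_db[key]
```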
[0019] In a first control operation 44, a sequential sampling step 53 of the front images 6 is performed, a sliding range 54 of a predefined number of sampled front images 55 being defined. A first pixel intensity variance of a median sampled front image 56 is calculated and compared with a first pixel intensity variance threshold 57. In the case where said first calculated variance is greater than said first variance threshold 57, the sampling step of the front images 6 is repeated. In the opposite case, the situation of loss of visibility P is identified, and the construction and display of the reconstituted world Mr are then authorized. A second control operation 45 is performed by the processing unit 23 in the event of generation of the request 47 of the pilot to trigger the construction and display of the extrapolated world Mex. Subject to an authorization of the construction and display of the reconstituted world Mr, second pixel intensity variances 58 are sequentially computed at a given frequency from the captured images 6,7,8,9 captured at a given instant by the different camera sets 2,3,4,5 and considered two by two. Said second variances 58 are then each compared with a second pixel intensity variance threshold 59. In the case where said second variances 58 are greater than the second variance threshold 59, the sampling of the front images 6 is reiterated. In the opposite case, the construction and display of the extrapolated world Mex are authorized. The maintenance of the construction and display of the extrapolated world Mex is conditioned on the continuation of an analysis of said second variances 58, which are compared with a third pixel intensity variance threshold 60 of a value greater than said second variance threshold 59.
[0020] In the case where said second variances 58 are greater than the third variance threshold 60, the construction and display of the extrapolated world Mex are prohibited so as to free the computing and memory capacities providing for the construction and display of the reconstituted world Mr. In the opposite case, the construction and display of the extrapolated world Mex are maintained.
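The two control operations 44 and 45 can be summarized as follows. The threshold values are assumptions made for illustration, and reading the pairwise "second variances" 58 as variances of pixel-wise image differences is one plausible interpretation of the text, not a disclosed choice.

```python
import statistics
from itertools import combinations

FIRST_THRESHOLD = 900.0    # thresholds 57, 59 and 60: assumed values only
SECOND_THRESHOLD = 600.0
THIRD_THRESHOLD = 1200.0

def pixel_variance(image):
    """Intensity variance of one image given as a flat sequence of pixels."""
    return statistics.pvariance(image)

def control_reconstituted(sampled_front_images) -> bool:
    """Control operation 44: authorize the reconstituted world Mr when the
    median image of the sliding range has a variance at or below the first
    threshold 57 (a uniform particle cloud flattens the pixel intensities);
    above it, the sampling step 53 is to be repeated."""
    median_frame = sampled_front_images[len(sampled_front_images) // 2]
    return pixel_variance(median_frame) <= FIRST_THRESHOLD

def control_extrapolated(instant_images) -> str:
    """Control operation 45, run only once Mr is authorized.

    `instant_images` are the images captured at the same instant by the
    different camera sets, considered two by two. Returns the decision."""
    second_variances = [
        pixel_variance([a - b for a, b in zip(img1, img2)])
        for img1, img2 in combinations(instant_images, 2)
    ]
    if any(v > THIRD_THRESHOLD for v in second_variances):
        return "prohibit"   # free capacity for the reconstituted world Mr
    if any(v > SECOND_THRESHOLD for v in second_variances):
        return "retry"      # repeat control 44 from the sampling step 53
    return "authorize"      # construct and display the extrapolated world Mex
```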
Claims:
Claims (15)
[0001]
1. A method of implementing a rotorcraft (1) navigation assistance system by construction and in-flight display of a dynamic representation of the outside world (Me) on a screen (20), said method comprising the following operations performed in flight:
-) on request (61) of the pilot, sequentially capture frontal images (6) relating to the external environment of the rotorcraft (1) by at least one set of front cameras (2) on board the rotorcraft (1) oriented towards a common forward line of sight,
-) sequentially compute integrated location front maps (35) from the frontal images (6) and from navigation data (12-1) of the rotorcraft (1) provided by on-board instrumentation (11) simultaneously with the sequential capture of the front images (6), then store said integrated location front maps (35) in a first database (36),
-) identify a situation of loss of visibility (P) causing the construction by calculation and the display on the screen (20) of a dynamic representation of the outside world (Me), called the reconstituted world (Mr), from a sequential extraction of the integrated location front maps (35) stored in the first database (36) according to the variation of the current navigation data (12-2) of the rotorcraft (1) compared with the navigation data (12-1) integrated into the integrated location front maps (35),
characterized in that the method comprises the following operations:
-) simultaneously with the capture of the front images (6), sequentially capture complementary images (7,8,9) relating to the external environment of the rotorcraft (1) by complementary sets of cameras (3,4,5), the stereoscopic images provided individually by the cameras of the different sets of cameras (2,3,4,5) equipping the rotorcraft (1) being said captured images (6,7,8,9), the complementary sets of cameras (3,4,5) comprising at least one set of right side cameras (5), at least one set of left side cameras (4) and at least one set of rear cameras (3), the cameras of the same complementary set of cameras (3,4,5) being oriented towards a common line of sight,
-) allocate by calculation to each of the captured images (6,7,8,9) dating data (13) provided by the on-board instrumentation (11) and origin data (14) relating to the identification of the set of cameras (2,3,4,5) from which the captured images (6,7,8,9) are derived, generating for each of the captured images (6,7,8,9) metadata (15,16,17,18), including frontal metadata (15) specific to the front images (6) and complementary metadata (16,17,18) specific to the complementary images (7,8,9) respectively provided by a given complementary set of cameras (3,4,5),
-) store the complementary metadata (16,17,18) in a second database (37) and sequentially compute said integrated location front maps (35),
-) on request (47) of the pilot, construct by calculation and display on the screen (20) a dynamic representation of the outside world (Me), said extrapolated world (Mex), the extrapolated world (Mex) being composed of the integrated location front maps (35) sequentially extracted from the first database (36) as a function of the variation of the current navigation data (12-3) of the rotorcraft compared with the navigation data (12-1) integrated into the integrated location front maps (35), to which extracted integrated location front maps (35) are added additional terrain elevation data (49) derived from the complementary metadata (16,17,18), said addition being made by correlating the dating data (13) with the origin data (14) respectively assigned to the front images (6) and the complementary images (7,8,9), said correlation identifying, according to the variation of the current navigation data (12-3) of the rotorcraft (1), the relative positions with respect to one another of the integrated location front maps (35) and of the additional terrain elevation data (49) derived from the captured images (6,7,8,9) provided at the same moment by the different camera sets (2,3,4,5).
[0002]
2. Method according to claim 1, characterized in that the additional terrain elevation data (49) are at least partly constituted by the complementary metadata (16,17,18), which are selectively extracted from the second database (37) and added to the integrated location front maps (35).
[0003]
3. Method according to claim 1, characterized in that the additional terrain elevation data (49) are at least partly constituted by complementary integrated location maps (38) previously constructed and stored in a third database (39), the complementary integrated location maps (38) being constructed from the complementary metadata (16,17,18) integrating the navigation data (12-1) of the rotorcraft (1) provided by the on-board instrumentation (11) simultaneously with the sequential capture of the complementary images (7,8,9), the prior construction of the complementary integrated location maps (38) being performed subject to a calculation capacity (50) of the navigation assistance system evaluated as available with respect to a predetermined calculation capacity threshold (51).
[0004]
4. Method according to claims 2 and 3, characterized in that, following the request (47) of the pilot relating to the construction and dynamic display of the extrapolated world (Mex), the method further comprises the following operations:
-) check by calculation, as the display of the extrapolated world (Mex) dynamically evolves, the availability of complementary integrated location maps (38) able to be correlated with the integrated location front maps (35) to construct and display the extrapolated world (Mex), then
-) in case of such availability of complementary integrated location maps (38), construct and display the extrapolated world (Mex), otherwise:
-) develop the complementary integrated location maps (38) necessary for the construction and display of the extrapolated world (Mex), subject to a calculation capacity (50) of the navigation assistance system evaluated as available with respect to the calculation capacity threshold (51), otherwise:
-) extract by calculation from the second database (37) the complementary metadata (16,17,18) necessary for the construction and display of the extrapolated world (Mex).
[0005]
5. Method according to claim 4, characterized in that the operation of extracting by calculation from the second database (37) the complementary metadata (16,17,18) required for the construction and dynamic display of the extrapolated world (Mex) is placed under the dependence of a request (52) of the pilot.
[0006]
6. Method according to any one of claims 1 to 5, characterized in that, during the sequential capture operation of the captured images (6,7,8,9), the method comprises a selection step (24) by calculation of the captured images (6,7,8,9) by application of at least one predefined relevance criterion (25) relating to a territorial continuity between different territories, said captured territories (10), respectively captured at the same instant by the different camera sets (2,3,4,5), the captured images (6,7,8,9) deduced to be irrelevant with respect to said relevance criterion (25) being removed prior to their processing by the navigation assistance system.
[0007]
7. Method according to claim 6, characterized in that said at least one relevance criterion (25) is predefined according to at least one of the following calculation parameters: -) a first threshold of separation between the rotorcraft (1) and a captured territory (10) captured at a given instant by any one of the camera sets (2,3,4,5), and -) a second threshold of distance between the different captured territories (10) respectively captured at the same instant by the different camera sets (2,3,4,5) equipping the rotorcraft (1).
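A possible numerical form of these two relevance tests is sketched below; the threshold values and function names are purely illustrative assumptions.

```python
# Hedged sketch of the relevance criterion (25) of claims 6-7: keep an
# image only if its captured territory (10) is close enough to the
# rotorcraft and to the territories captured at the same instant by the
# other camera sets. Threshold values are illustrative only.
import numpy as np

def is_relevant(territory_pos, rotorcraft_pos, simultaneous_territory_pos,
                d1_max=200.0, d2_max=150.0):
    # first parameter: rotorcraft <-> captured territory separation
    if np.linalg.norm(territory_pos - rotorcraft_pos) > d1_max:
        return False
    # second parameter: distance between simultaneously captured territories
    return all(np.linalg.norm(territory_pos - other) <= d2_max
               for other in simultaneous_territory_pos)
```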
[0008]
8. Method according to any one of claims 1 to 7, characterized in that, during the sequential capture operation of the captured images (6,7,8,9), the method comprises an identification step (26) of at least one captured territory (10), said territory of interest (27), by application of at least one predefined criterion of interest (28) relating to the suitability of said territory of interest (27) to be exploited by the rotorcraft (1), following which identification step (26) interest data (29) is selectively integrated with the metadata (15,16,17,18) from which are derived the captured images (6,7,8,9) relating to said territory of interest (27).
[0009]
9. Method according to claim 8, characterized in that, on request (33) of the pilot, the method comprises an operation of developing a flight plan (32) between the current position of the rotorcraft (1) and said at least one territory of interest (27), said flight plan (32) being superimposed on the representation of the outside world (Me) displayed by the screen (20).
[0010]
10. Method according to any one of claims 8 and 9, characterized in that, said identification step (26) being performed by the pilot of the rotorcraft (1), the interest data (29) is integrated with the metadata (15,16,17,18) upon request (30) of the pilot.
[0011]
11. Method according to claim 8, characterized in that, said identification step (26) being an algorithmic processing operation (31) of the shape-recognition type applied to the pixels of the captured images (6,7,8,9), the integration of the interest data (29) with the metadata (15,16,17,18) is performed by calculation upon recognition of at least one predefined shape in the captured images (6,7,8,9).
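One standard way to realize such a shape-recognition processing, shown only as a stand-in since the patent names no specific algorithm, is template matching, here via OpenCV; the threshold value is an assumption.

```python
# Stand-in for the algorithmic processing (31) of claim 11: OpenCV
# template matching against a predefined shape. The threshold and the
# choice of algorithm are assumptions, not disclosed by the patent.
import cv2

def should_tag_interest(captured_image, shape_template, score_threshold=0.8):
    """True when a predefined shape is recognized, i.e. when interest
    data (29) should be attached to the image's metadata."""
    scores = cv2.matchTemplate(captured_image, shape_template,
                               cv2.TM_CCOEFF_NORMED)
    return scores.max() >= score_threshold
```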
[0012]
12. Method according to any one of claims 8 to 11, characterized in that said at least one criterion of interest (28) is at least one of the following criteria of interest relating to the suitability of a territory of interest (27) to be used by the rotorcraft (1) as: a landing zone, a refuge zone, a danger zone, or an intervention zone.
[0013]
13. Method according to any one of claims 1 to 12, characterized in that the identification of said situation of loss of visibility (P) is performed according to at least one of the following operations: -) an analysis operation (42) by calculation of the pixels defining the captured images (6,7,8,9), -) a comparison operation (43) by calculation between the current ground height of the rotorcraft and a ground-height threshold identified as generating a situation of loss of visibility (Brown-out / White-out), and/or -) on request (40) of the pilot confronted with a situation of loss of visibility (P).
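These three detection routes can be combined as in the sketch below; the contrast test is one plausible pixel analysis only, and the numeric thresholds are illustrative assumptions (the 15 m ground-height figure echoes the order of magnitude given earlier in the description).

```python
# Hedged sketch of claim 13: declare a loss-of-visibility situation (P)
# from a pixel analysis (42), a ground-height comparison (43), or a pilot
# request (40). The contrast-collapse test is an illustrative choice.
import numpy as np

def loss_of_visibility(front_image, height_agl, pilot_request,
                       height_threshold=15.0, contrast_threshold=50.0):
    pixel_test = np.var(front_image) < contrast_threshold  # washed-out frame
    height_test = height_agl < height_threshold            # close to the ground
    return pixel_test or height_test or pilot_request
```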
[0014]
14. Method according to any one of claims 1 to 13, characterized in that the method comprises a control operation (44) of the display of the reconstituted world (Mr) implementing the following steps: -) sampling (53) sequentially by calculation, at a given frequency, a predefined number of front images (6), said sampled front images (55), -) calculating a sliding range (54) of a predefined number of said sampled front images (55), -) calculating a first pixel-intensity variance of a median sampled front image (56) of said range and comparing by calculation said first calculated variance with a first variance threshold (57) of pixel intensity, then -) in the case where said first calculated variance is greater than said first variance threshold (57), repeating the sampling step (53) of the front images (6), otherwise authorizing the construction and the dynamic display of the reconstituted world (Mr).
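Claim 14 amounts to a sampling loop gated by a variance test; a minimal sketch, assuming an iterable stream of front images and illustrative window size and threshold, could read as follows.

```python
# Hedged sketch of the display control (44) of claim 14. Window size and
# variance threshold (57) are illustrative; the patent fixes neither.
from collections import deque

import numpy as np

def control_reconstituted_world(front_image_stream, window_size=5,
                                first_variance_threshold=500.0):
    window = deque(maxlen=window_size)       # sliding range (54)
    for image in front_image_stream:         # sampled front images (55)
        window.append(image)
        if len(window) < window_size:
            continue
        # median sampled front image (56) of the sliding range
        ordered = sorted(window, key=lambda im: float(np.mean(im)))
        median_image = ordered[window_size // 2]
        if np.var(median_image) > first_variance_threshold:
            continue                         # variance too high: keep sampling
        return True                          # authorize display of (Mr)
    return False
```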
[0015]
15. Method according to claim 14, characterized in that the method comprises a control operation (45) of the display of the extrapolated world (Mex) implementing the following steps: -) in the case of an authorization to construct the reconstituted world (Mr) resulting from the control of the display of the reconstituted world (Mr), calculating second variances (58) of intensity of the pixels of the captured images (6,7,8,9) respectively captured at a given instant by the different camera sets (2,3,4,5) considered in pairs, -) comparing by calculation said second variances (58) with a second variance threshold (59) of pixel intensity, then, in the case where at least one of said second variances (58) is greater than the second variance threshold (59), reiterating the control operation (44) of the display of the reconstituted world (Mr), otherwise authorizing the construction and display of the extrapolated world (Mex), then -) comparing said second variances (58) with a third pixel-intensity variance threshold (60) of a value greater than said second variance threshold (59), then, in the case where said second variances (58) are greater than the third variance threshold (60), prohibiting the construction and display of the extrapolated world (Mex) by maintaining the execution of the control operation (44) of the display of the reconstituted world (Mr), otherwise maintaining the construction and dynamic display of the extrapolated world (Mex).
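Finally, the two-threshold gate of claim 15 can be collapsed into a single decision function; the pooled-pixel variance used here is one crude reading of the "pairwise" second variances (58), and both threshold values are assumptions.

```python
# Hedged one-shot reading of the display control (45) of claim 15. In the
# claim, the third threshold (60) is applied while the extrapolated
# display already runs; here the three outcomes are collapsed into one
# decision. Thresholds and the pairwise score are assumptions.
from itertools import combinations

import numpy as np

def control_extrapolated_world(simultaneous_images,
                               second_threshold=300.0,   # threshold (59)
                               third_threshold=900.0):   # threshold (60) > (59)
    pair_variances = [
        float(np.var(np.concatenate((a.ravel(), b.ravel()))))
        for a, b in combinations(simultaneous_images, 2)
    ]                                                     # second variances (58)
    if any(v > third_threshold for v in pair_variances):
        return "prohibit"    # keep running the (Mr) display control (44)
    if any(v > second_threshold for v in pair_variances):
        return "reiterate"   # redo the (Mr) display control (44)
    return "allow"           # construct and display the extrapolated world (Mex)
```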